Distributed Newton Method for Large-Scale Consensus Optimization
Authors
Abstract
Similar Resources
A Distributed Newton Method for Large Scale Consensus Optimization
In this paper, we propose a distributed Newton method for consensus optimization. Our approach outperforms state-of-the-art methods, including ADMM. The key idea is to exploit the sparsity of the dual Hessian and recast the computation of the Newton step as one of efficiently solving symmetric diagonally dominant linear equations. We validate our algorithm both theoretically and empirically. On...
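The key step described in this abstract, computing the Newton step by solving a symmetric diagonally dominant (SDD) linear system, can be illustrated with a minimal sketch. The conjugate gradient solver and the small graph-Laplacian-based SDD matrix below are illustrative assumptions, not the paper's actual solver or problem data.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A via conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Hypothetical SDD matrix: Laplacian of a 5-node path graph plus a positive diagonal.
n = 5
A = np.diag([1.0, 2.0, 2.0, 2.0, 1.0])
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -1.0
A += 0.5 * np.eye(n)         # diagonally dominant, hence symmetric positive definite
b = np.ones(n)               # stand-in for the Newton-step right-hand side

x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b, atol=1e-8)
```

Because SDD systems admit fast (near-linear-time) solvers, recasting the Newton step this way is what makes the approach scale; the dense CG above is only for readability.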
A Stochastic Quasi-Newton Method for Large-Scale Optimization
The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust and sca...
متن کاملNewton - Raphson Consensus 1 for Distributed Convex Optimization
We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where a network of agents, each endowed with a local private multidimensional convex cost and subject to communication constraints, wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average co...
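The combination of average consensus with Newton-Raphson updates described here can be sketched for the simplest case of scalar quadratic local costs, where the global minimizer is a curvature-weighted average of the local minimizers. The ring network, mixing matrix, and cost parameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical 4-agent ring network with a doubly stochastic mixing matrix.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Local costs f_i(x) = 0.5 * h[i] * (x - c[i])**2  (private to each agent).
h = np.array([1.0, 2.0, 3.0, 4.0])   # local curvatures (Hessians)
c = np.array([1.0, 2.0, 3.0, 4.0])   # local minimizers
x_star = (h @ c) / h.sum()           # minimizer of the summed cost

# For quadratics, h_i * x - f_i'(x) = h_i * c_i is constant, so the scheme
# reduces to running average consensus on h_i * c_i and on h_i separately.
y = h * c                            # numerator estimates
z = h.copy()                         # denominator (curvature) estimates
for _ in range(200):
    y = W @ y                        # one consensus round per iteration
    z = W @ z

x = y / z                            # each agent's Newton-Raphson consensus estimate
assert np.allclose(x, x_star, atol=1e-6)
```

For non-quadratic costs the full method re-linearizes at each iterate and tracks time-varying averages, but the ratio-of-consensus-averages structure shown above is the core idea.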
Asynchronous Newton-Raphson Consensus for Robust Distributed Convex Optimization
A general trend in the development of distributed convex optimization procedures is to robustify existing algorithms so that they can tolerate the characteristics and conditions of communications among real devices. This manuscript follows this tendency by robustifying a promising distributed convex optimization procedure known as Newton-Raphson consensus. More specifically, we modi...
Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization
We consider the distributed unconstrained minimization of separable convex cost functions, where the global cost is given by the sum of several local and private costs, each associated to a specific agent of a given communication network. We specifically address an asynchronous distributed optimization technique called Newton-Raphson Consensus. Besides having low computational complexity, low co...
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2019
ISSN: 0018-9286,1558-2523,2334-3303
DOI: 10.1109/tac.2019.2907711